








Neural Information Processing Systems

Table 7: Examples of Generated Cartoon Descriptions

Type of description | GPT-4o | Human Written [20]
Canny description | A knight in armor is riding a horse, holding a lance with a traffic light on top. A line of businessmen in suits follows behind him. | There are two men on a horse. They are wearing soldier outfits.
Uncanny description | It's unusual to see a medieval knight leading modern businessmen as if going into battle. |



Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment

Neural Information Processing Systems

Such a reward model serves as a proxy for human preference, and it is critical for guiding the RL step toward improving model quality. In this work, we argue that the SFT stage also benefits significantly from learning a reward model. Instead of using the human demonstration data directly via supervised learning, we propose to leverage an Inverse Reinforcement Learning (IRL) technique to simultaneously build a reward model and a policy model. This approach leads to new SFT algorithms that are not only efficient to implement, but also robust to the presence of low-quality supervised learning data. Moreover, we discover a connection between the proposed IRL-based approach and a recent line of work called Self-Play Fine-tuning (SPIN, Chen et al. [2024]).
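To make the idea of jointly learning a reward model and a policy concrete, here is a minimal toy sketch in NumPy. It is not the paper's algorithm: it uses a hypothetical setup where "responses" are feature vectors, the reward model is a linear scorer trained discriminator-style to rank demonstrations above policy samples (a common IRL-flavored update), and the policy is a Gaussian whose mean is nudged toward higher reward. All names (`mu_demo`, `policy_mean`, `reward`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: demonstrations cluster around a "good response" feature vector.
dim = 4
mu_demo = np.ones(dim)
demos = mu_demo + 0.1 * rng.standard_normal((64, dim))

# Reward model: a linear scorer r(x) = w @ x (hypothetical parameterization).
w = np.zeros(dim)

# Policy: a Gaussian with a learnable mean (hypothetical parameterization).
policy_mean = np.zeros(dim)

def reward(x, w):
    return x @ w

lr = 0.1
for step in range(200):
    samples = policy_mean + 0.1 * rng.standard_normal((64, dim))

    # Reward step: logistic-regression update that pushes demonstration
    # rewards up and policy-sample rewards down (discriminator-style IRL,
    # not the paper's exact objective).
    logits = np.concatenate([reward(demos, w), reward(samples, w)])
    labels = np.concatenate([np.ones(64), np.zeros(64)])
    probs = 1.0 / (1.0 + np.exp(-logits))
    feats = np.concatenate([demos, samples])
    w += lr * feats.T @ (labels - probs) / len(labels)

    # Policy step: move the policy in the direction of higher reward.
    policy_mean += lr * w
```

After a few hundred alternating updates the policy mean drifts toward the demonstration cluster, at which point the reward model can no longer separate the two distributions; this mirrors the intuition that the IRL view fits the policy to the demonstrations through a learned reward rather than by direct imitation.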